
    A Unified Model of Spatiotemporal Processing in the Retina

    A computational model of visual processing in the vertebrate retina provides a unified explanation of a range of data previously treated by disparate models. Three results are reported here: the model proposes a functional explanation for the primary feed-forward retinal circuit found in vertebrate retinae; it shows how this retinal circuit combines nonlinear adaptation with the desirable properties of linear processing; and it accounts for the origin of parallel transient (nonlinear) and sustained (linear) visual processing streams as simple variants of the same retinal circuit. The retina, owing to its accessibility and to its fundamental role in the initial transduction of light into neural signals, is among the most extensively studied neural structures in the nervous system. Since the pioneering anatomical work by Ramón y Cajal at the turn of the last century [1], technological advances have enabled detailed descriptions of the physiological, pharmacological, and functional properties of many types of retinal cells. However, the relationship between structure and function in the retina is still poorly understood. This article outlines a computational model developed to address fundamental constraints of biological visual systems. Neurons that process nonnegative input signals, such as retinal illuminance, face an inescapable tradeoff between accurate processing in the spatial and temporal domains. Accurate processing in both domains can be achieved with a model that combines nonlinear mechanisms for temporal and spatial adaptation within three layers of feed-forward processing. The resulting architecture is structurally similar to the feed-forward retinal circuit connecting photoreceptors to retinal ganglion cells through bipolar cells. This similarity suggests that the three-layer structure observed in all vertebrate retinae [2] is a required minimal anatomy for accurate spatiotemporal visual processing. This hypothesis is supported through computer simulations showing that the model's output layer accounts for many properties of retinal ganglion cells [3],[4],[5],[6]. Moreover, the model shows how the retina can extend its dynamic range through nonlinear adaptation while exhibiting seemingly linear behavior in response to a variety of spatiotemporal input stimuli. This property is the basis for the prediction that the same retinal circuit can account for both sustained (X) and transient (Y) cat ganglion cells [7] through simple morphological changes. The ability to generate distinct functional behaviors by simple changes in cell morphology suggests that different functional pathways originating in the retina may have evolved from a unified anatomy designed to cope with the constraints of low-level biological vision.
    Sloan Fellowship
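
    As a loose illustration of the model's core claim, the sketch below implements three feed-forward layers in Python: a divisive (Naka-Rushton-style) photoreceptor nonlinearity for adaptation, a difference-of-Gaussians bipolar stage for spatial opponency, and a rectified ganglion output. All function names and parameter values are our own expository assumptions, not the paper's equations.

        # Minimal sketch of a three-layer feed-forward retinal circuit.
        # Hypothetical simplification; parameter values are illustrative.
        import numpy as np

        def naka_rushton(x, half_sat=1.0):
            """Divisive photoreceptor nonlinearity for nonnegative illuminance."""
            return x / (x + half_sat)

        def dog_kernel(width_center, width_surround, size=21):
            """Difference-of-Gaussians spatial weighting for the bipolar layer."""
            xs = np.arange(size) - size // 2
            center = np.exp(-xs**2 / (2 * width_center**2))
            surround = np.exp(-xs**2 / (2 * width_surround**2))
            return center / center.sum() - surround / surround.sum()

        def retina_response(illuminance, width_center=1.0, width_surround=4.0):
            """Photoreceptors -> bipolar (center-surround) -> ganglion (rectified)."""
            receptors = naka_rushton(illuminance)                  # layer 1: adaptation
            kernel = dog_kernel(width_center, width_surround)
            bipolar = np.convolve(receptors, kernel, mode="same")  # layer 2: opponency
            return np.maximum(bipolar, 0.0)                        # layer 3: output

        stimulus = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(100) / 25)
        print(retina_response(stimulus)[:5])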

    A Nonlinear Model of Spatiotemporal Retinal Processing: Simulations of X and Y Retinal Ganglion Cell Behavior

    This article describes a nonlinear model of neural processing in the vertebrate retina, comprising model photoreceptors, model push-pull bipolar cells, and model ganglion cells. Previous analyses and simulations have shown that with a choice of parameters that mimics beta cells, the model exhibits X-like linear spatial summation (null response to contrast-reversed gratings) in spite of photoreceptor nonlinearities; on the other hand, a choice of parameters that mimics alpha cells leads to Y-like frequency doubling. This article extends the previous work by showing that the model can replicate qualitatively many of the original findings on X and Y cells with a fixed choice of parameters. The results generally support the hypothesis that X and Y cells can be seen as functional variants of a single neural circuit. The model also suggests that both depolarizing and hyperpolarizing bipolar cells converge onto both ON and OFF ganglion cell types. The push-pull connectivity enables ganglion cells to remain sensitive to deviations about the mean output level of nonlinear photoreceptors. These and other properties of the push-pull model are discussed in the general context of retinal processing of spatiotemporal luminance patterns.
    Alfred P. Sloan Research Fellowship (BR-3122); Air Force Office of Scientific Research (F49620-92-J-0499)
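
    To make the push-pull idea concrete, the fragment below passes increments and decrements about the mean through the same compressive photoreceptor nonlinearity and lets the two bipolar paths converge by subtraction; the compressed mean level cancels, leaving a signal proportional to the deviation. The nonlinearity and all names are illustrative assumptions rather than the model's actual equations.

        import numpy as np

        def photoreceptor(x):
            """Compressive nonlinearity acting on nonnegative input."""
            return x / (x + 1.0)

        def ganglion_drive(deviation, mean_level=10.0):
            """Push-pull convergence: the compressed mean cancels in the
            difference, preserving sensitivity to deviations about it."""
            push = photoreceptor(mean_level + deviation)  # depolarizing bipolar path
            pull = photoreceptor(mean_level - deviation)  # hyperpolarizing bipolar path
            return push - pull

        for d in (0.0, 1.0, 2.0):
            print(d, ganglion_drive(d))  # zero at the mean, grows with |deviation|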

    A Neural Network Model for the Spatial and Temporal Response of Retinal Ganglion Cells

    This article introduces a quantitative model of early visual system function. The model is formulated to unify analyses of spatial and temporal information processing by the nervous system. Functional constraints of the model suggest mechanisms analogous to photoreceptors, bipolar cells, and retinal ganglion cells, which can be formally represented with first-order differential equations. Preliminary numerical simulations and analytical results show that the same formal mechanisms can explain the behavior of both X (linear) and Y (nonlinear) retinal ganglion cell classes by simple changes in the relative width of the receptive field (RF) center and surround mechanisms. Specifically, an increase in the width of the RF center results in a change from X-like to Y-like response, in agreement with anatomical data on the relationship between α- and β-cell RF profiles. Simulations of model response to various spatio-temporal input patterns replicate many of the classical properties of X and Y cells, including transient (Y) versus sustained (X) responses, null-phase responses to alternating gratings in X cells, on-off or frequency-doubling responses in Y cells, and phase-independent on-off responses in Y cells at high spatial frequencies. The model's formal mechanisms may be used in other portions of the visual system and more generally in nervous system structures involved with spatio-temporal information processing.
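
    The two formal mechanisms named above can be sketched directly. Below, each cell stage is a first-order leaky integrator, and the receptive field is a center-surround profile whose center width is the single free parameter that moves the model from an X-like to a Y-like configuration. Names and parameter values are our illustrative choices, not the paper's.

        import numpy as np

        def leaky_integrator(inputs, tau=10.0, dt=1.0):
            """Euler integration of the first-order ODE  tau * dv/dt = -v + input."""
            v, out = 0.0, []
            for x in inputs:
                v += (dt / tau) * (-v + x)
                out.append(v)
            return np.array(out)

        def receptive_field(xs, center_width, surround_width=4.0):
            """Center-surround RF; widening the center shifts X-like toward Y-like."""
            center = np.exp(-xs**2 / (2 * center_width**2))
            surround = np.exp(-xs**2 / (2 * surround_width**2))
            return center / center.sum() - surround / surround.sum()

        xs = np.arange(-10, 11)
        narrow = receptive_field(xs, center_width=1.0)  # X-like configuration
        wide = receptive_field(xs, center_width=3.0)    # Y-like configuration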

    Motivation

    The ability of humans and animals to survive in a constantly changing environment is a testament to the power of biological processes. At any given instant in our lives, we are faced with an enormous number of sensory stimuli, and we can typically generate an equally large number of behaviors. How do we learn to ignore irrelevant information and suppress inappropriate behavior so that we may function in a complex environment? In this chapter we discuss motivation, the internal force that produces actions reflecting the interactions between our needs and the demands of our environment. We will first discuss what psychologists mean when they refer to motivation, and then review neural network theories that can explain how motivation arises within biological nervous systems.
    Sloan Fellowship (BR-3122); Air Force Office of Scientific Research (F49620-92-J-0499, F49620-92-J-0334)

    Mobile Robot Range Sensing through Visual Looming

    This article describes and evaluates visual looming as a monocular range sensing method for mobile robots. The looming algorithm is based on the relationship between the displacement of a camera relative to an object, and the resulting change in the size of the object's image on the focal plane of the camera. We have carried out systematic experiments to evaluate the ranging accuracy of the looming algorithm using a Pioneer I mobile robot equipped with a color camera. We have also performed a noise sensitivity analysis of the looming algorithm, obtaining theoretical error bounds on the range estimates for given levels of odometric and visual noise, which were verified through experimental data. Our results suggest that looming can be used as a robust, inexpensive range sensor as a complement to sonar.
    Defense Advanced Research Projects Agency; Office of Naval Research; Naval Research Laboratory (N00014-96-1-0772, N00014-95-1-0409)
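
    The geometry behind the algorithm admits a compact worked example. Under a pinhole camera model the image size of an object of physical size S at range Z is s = fS/Z, so two image-size measurements taken before and after a known forward displacement determine the range. The function below is our condensation of that relationship; names are ours, not the paper's.

        def range_from_looming(s1, s2, displacement):
            """Range at the first measurement, given image sizes s1 (before) and s2
            (after) advancing `displacement` toward the object; requires s2 > s1."""
            if s2 <= s1:
                raise ValueError("image must loom (grow) as the camera approaches")
            # From s1*Z = s2*(Z - displacement):  Z = s2*displacement / (s2 - s1)
            return s2 * displacement / (s2 - s1)

        # Example: the image grows from 20 to 25 pixels after advancing 1.0 m,
        # so the object was 5.0 m away at the first measurement.
        print(range_from_looming(20.0, 25.0, 1.0))  # -> 5.0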

    Application of Biological Learning Theories to Mobile Robot Avoidance and Approach Behaviors

    We present a neural network that learns to control approach and avoidance behaviors in a mobile robot using the mechanisms of classical and operant conditioning. Learning, which requires no supervision, takes place as the robot moves around an environment cluttered with obstacles and light sources. The neural network requires no knowledge of the geometry of the robot or of the quality, number, or configuration of the robot's sensors. In this article we provide a detailed presentation of the model, and show our results with the Khepera and Pioneer 1 mobile robots.
    Office of Naval Research (N00014-96-1-0772, N00014-95-1-0409)
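
    As a hedged sketch of how operant conditioning can shape such a controller, the fragment below strengthens or weakens sensor-to-action associations in proportion to the reinforcement that follows each action; a collision yields negative reinforcement and suppresses the offending action in similar sensory contexts. The update rule, class, and names are our illustrative assumptions, not the network presented in the article.

        import numpy as np

        class ConditioningController:
            """Reward-gated associative learning between sensors and actions."""

            def __init__(self, n_sensors, n_actions, lr=0.1):
                self.w = np.zeros((n_actions, n_sensors))  # sensor-to-action weights
                self.lr = lr

            def act(self, sensors):
                return int(np.argmax(self.w @ sensors))    # most strongly cued action

            def learn(self, sensors, action, reinforcement):
                # Operant rule: reinforcement gates a Hebbian-style update of the
                # association between the active sensors and the emitted action.
                self.w[action] += self.lr * reinforcement * sensors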

    Vector Associative Maps: Unsupervised Real-time Error-based Learning and Control of Movement Trajectories

    This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described.
    VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
    National Science Foundation (IRI-87-16960, IRI-87-6960); Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083)
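
    The VITE kinematics stated in the abstract reduce to a short loop: the DV is TPC minus PPC, the GO signal gates its integration into the PPC, and integration halts once the DV reaches zero. The sketch below is a minimal discrete-time rendering of those three statements; step size, GO value, and names are our expository choices.

        import numpy as np

        def vite_trajectory(tpc, ppc0, go=0.2, dt=1.0, steps=50):
            """Integrate the PPC toward the TPC under a speed-controlling GO signal."""
            ppc = np.array(ppc0, dtype=float)
            trajectory = [ppc.copy()]
            for _ in range(steps):
                dv = np.asarray(tpc) - ppc  # Difference Vector: TPC - PPC
                ppc += dt * go * dv         # PPC integrates the (DV)*(GO) product
                trajectory.append(ppc.copy())
            return np.array(trajectory)

        # A larger GO signal yields a faster movement to the same target.
        path = vite_trajectory(tpc=[1.0, 0.5], ppc0=[0.0, 0.0], go=0.3)
        print(path[-1])  # the PPC converges on the TPC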

    A Collection of ART-Family Graphical Simulations

    The Adaptive Resonance Theory (ART) architecture, first proposed by Grossberg (1976a, 1976b), is a self-organizing neural network for stable pattern categorization in response to arbitrary input sequences. Since its original formulation, several versions of ART have been proposed, each designed to handle a particular task or input format. Recent ART architectures have been designed to work in a supervised fashion, offering a viable alternative to supervised neural networks such as backpropagation (Rumelhart, Hinton, & Williams, 1986). Perhaps the best-known variant of ART is ART2 (Carpenter & Grossberg, 1987b), an unsupervised neural network that handles analog inputs. We have developed a series of simulators for some of the ART-family neural architectures, namely, ART2 (Carpenter & Grossberg, 1987b), ART2-A (Carpenter, Grossberg, & Rosen, 1991b), Fuzzy ART (Carpenter, Grossberg, & Rosen, 1990), and Fuzzy ARTMAP (Carpenter, Grossberg, Markuzon, & Reynolds, 1992). This article briefly summarizes the history and functionality of ART and its variants, and then describes the software package, which is available in the public domain.
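
    For readers unfamiliar with the family, the step below condenses the standard Fuzzy ART presentation cycle (complement-coded input, choice function, vigilance test, fast learning) into a single function. This is our illustrative condensation of the published algorithm, not code from the simulators described here.

        import numpy as np

        def fuzzy_art_step(I, weights, rho=0.75, alpha=0.001, beta=1.0):
            """One input presentation: choose the best category that passes the
            vigilance test and update it; otherwise commit a new category."""
            order = sorted(range(len(weights)),
                           key=lambda j: -np.minimum(I, weights[j]).sum()
                                          / (alpha + weights[j].sum()))
            for j in order:                       # categories ranked by choice function
                match = np.minimum(I, weights[j]).sum() / I.sum()
                if match >= rho:                  # vigilance test
                    weights[j] = (beta * np.minimum(I, weights[j])
                                  + (1 - beta) * weights[j])
                    return j
            weights.append(I.copy())              # no resonance: new category
            return len(weights) - 1

        a = np.array([0.2, 0.8])
        I = np.concatenate([a, 1 - a])            # complement coding
        print(fuzzy_art_step(I, weights=[]))      # -> 0, the first committed category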

    A Photoreceptor Model that Replicates Human Light Adaptation Characteristics

    Whitehall (593-24); Office of Naval Research (N00014-95-1-0409); Universidad Nacional Autónoma de México (Graduate Fellowship)